algorithmic advice
Algorithmic Advice as a Strategic Signal on Competitive Markets
Rebholz, Tobias R., Uphoff, Maxwell, Bernges, Christian H. R., Scholten, Florian
As algorithms increasingly mediate competitive decision-making, their influence extends beyond individual outcomes to shaping strategic market dynamics. In two preregistered experiments, we examined how algorithmic advice affects human behavior in classic economic games with unique, non-collusive, and analytically traceable equilibria. In Experiment 1 (N = 107), participants played a Bertrand price competition with individualized or collective algorithmic recommendations. Initially, collusively upward-biased advice increased prices, particularly when individualized, but prices gradually converged toward equilibrium over the course of the experiment. However, participants avoided setting prices above the algorithm's recommendation throughout the experiment, suggesting that advice served as a soft upper bound for acceptable prices. In Experiment 2 (N = 129), participants played a Cournot quantity competition with equilibrium-aligned or strategically biased algorithmic recommendations. Here, individualized equilibrium advice supported stable convergence, whereas collusively downward-biased advice led to sustained underproduction and supracompetitive profits, hallmarks of tacit collusion. In both experiments, participants responded more strongly and consistently to individualized advice than to collective advice, potentially due to greater perceived ownership of the former. These findings demonstrate that algorithmic advice can function as a strategic signal, shaping coordination even without explicit communication. The results echo real-world concerns about algorithmic collusion and underscore the need for careful design and oversight of algorithmic decision-support systems in competitive environments.
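To make the game structure concrete: in a Cournot duopoly with linear demand, the unique Nash equilibrium quantity exceeds each firm's share of the joint-profit-maximizing (collusive) output, which is why downward-biased quantity advice pushes play toward tacit collusion. The sketch below is not from the paper; demand and cost parameters are illustrative assumptions, using the textbook inverse demand P = a - b(q1 + q2) with constant marginal cost c.

```python
# Illustrative Cournot duopoly (parameters a, b, c are invented, not from the study).
# Inverse demand: P = a - b * (q1 + q2); each firm has constant marginal cost c.
a, b, c = 100.0, 1.0, 10.0

def best_response(q_rival):
    """Profit-maximizing quantity given the rival's quantity:
    maximize (a - b*(q + q_rival) - c) * q  =>  q = (a - c - b*q_rival) / (2b)."""
    return max(0.0, (a - c - b * q_rival) / (2 * b))

# Iterated best responses converge to the unique, non-collusive Nash equilibrium.
q1 = q2 = 0.0
for _ in range(200):
    q1, q2 = best_response(q2), best_response(q1)

nash_q = (a - c) / (3 * b)       # analytic Cournot equilibrium quantity per firm
collusive_q = (a - c) / (4 * b)  # each firm's half of the cartel (monopoly) output

print(q1, nash_q, collusive_q)
```

Because collusive_q < nash_q, an advisor that recommends quantities near collusive_q is "collusively downward-biased" relative to the equilibrium benchmark; sustained adherence yields the underproduction and supracompetitive profits the abstract describes.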
Learning When to Advise Human Decision Makers
Artificial intelligence (AI) is increasingly used to support human decision making in high-stakes settings in which the human operator, rather than the AI algorithm, needs to make the final decision. For example, in the criminal justice system, algorithmic risk assessments are being used to assist judges in making pretrial-release decisions and at sentencing and parole [20, 69, 65, 18]; in healthcare, AI algorithms are being used to assist physicians in assessing patients' risk factors and in targeting health inspections and treatments [63, 26, 77, 49]; and in human services, AI algorithms are being used to predict which children are at risk of abuse or neglect, in order to assist decisions made by child-protection staff [79, 16]. In such systems, decisions are often based on risk assessments, and statistical machine-learning algorithms' ability to excel at prediction tasks [60, 21, 34, 68, 62] is leveraged to provide predictions as advice to human decision makers [45]. For example, the decision judges make on whether it is safe to release a defendant until his trial is based on their assessment of how likely this defendant is, if released, to violate his release terms, i.e., to commit another crime before his trial or to fail to appear in court for his trial. For making such risk predictions, judges in the US are assisted by a "risk score" predicted for the defendant by a machine-learning algorithm [20, 69].
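A minimal sketch of what such a "risk score" looks like as advice: a fitted model maps defendant features to a violation probability, which is shown to the judge who retains the final decision. The feature names and weights below are invented for illustration and do not correspond to any deployed risk-assessment tool.

```python
import math

# Hypothetical logistic risk model: features and weights are illustrative only.
WEIGHTS = {"prior_arrests": 0.4, "prior_failures_to_appear": 0.9, "age": -0.03}
BIAS = -1.5

def risk_score(features):
    """Return an estimated probability in (0, 1) that the defendant,
    if released, violates his release terms before trial."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# The score is presented as advice; the human decision maker chooses the action.
score = risk_score({"prior_arrests": 2, "prior_failures_to_appear": 1, "age": 30})
print(score)
```

The point of the sketch is the division of labor: the model outputs only a calibrated prediction, while thresholds and release decisions stay with the human operator.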
Humans and AI: Organizational Change
According to McKinsey, "Research shows that 70 percent of complex, large-scale change programs don't reach their stated goals. Common pitfalls include a lack of employee engagement, inadequate management support, poor or nonexistent cross-functional collaboration, and a lack of accountability." Last year I was doing some spring cleaning and looking for space in my home office for a digital piano. As I pulled books from my bookcase, packing them into boxes to go into storage, I found my Blockbuster Video membership card. I'd tucked it inside a book as a bookmark.
Decision-Makers' Processing of AI Algorithmic Advice: Automation Bias versus Selective Adherence
Alon-Barkat, Saar, Busuioc, Madalina
Artificial intelligence algorithms are increasingly adopted as decisional aids by public organisations, with the promise of overcoming biases of human decision-makers. At the same time, the use of algorithms may introduce new biases in the human-algorithm interaction. A key concern emerging from psychology studies regards human overreliance on algorithmic advice even in the face of warning signals and contradictory information from other sources (automation bias). A second concern regards decision-makers' inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes (selective adherence). To date, we lack rigorous empirical evidence about the prevalence of these biases in a public sector context. We assess these via two pre-registered experimental studies (N = 1,509), simulating the use of algorithmic advice in decisions pertaining to the employment of school teachers in the Netherlands. In study 1, we test automation bias by exploring participants' adherence to a prediction of teachers' performance, which contradicts additional evidence, while comparing between two types of predictions: algorithmic v. human-expert. We do not find evidence for automation bias. In study 2, we replicate these findings, and we also test selective adherence by manipulating the teacher's ethnic background. We find a propensity for adherence when the advice predicts low performance for a teacher of a negatively stereotyped ethnic minority, with no significant differences between algorithmic and human advice. Overall, our findings of selective, biased adherence belie the promise of neutrality that has propelled algorithm use in the public sector.
Do People Trust Algorithms More Than Companies Realize?
Since the 1950s, researchers have documented the many types of predictions in which algorithms outperform humans. Algorithms beat doctors and pathologists in predicting the survival of cancer patients, occurrence of heart attacks, and severity of diseases. Algorithms predict recidivism of parolees better than parole boards. And they predict whether a business will go bankrupt better than loan officers. According to anecdotes in a classic book on the accuracy of algorithms, many of these earliest findings were met with skepticism.